Gabrielle Sierra - Editorial Director and Producer
Molly McAnany - Associate Podcast Producer
Markus Zakaria - Audio Producer and Sound Designer
-
Kevin Weil - Chief Product Officer of OpenAI and Member at the Council on Foreign Relations
-
Allison Okamura - Professor of Mechanical Engineering at Stanford University and Senior Fellow at the Hoover Institution
Transcript
Martin GILES: Welcome to The Interconnect, a new podcast series from the Council on Foreign Relations and the Stanford Emerging Technology Review. Each episode brings together experts from critical fields of emerging technology to explore recent groundbreaking developments, what's coming over the horizon, and how the implications for American innovation leadership interconnect with the fast-changing geopolitical environment. I'm Martin Giles, and I'm the Managing Editor of the Stanford Emerging Technology Review. In this episode, we’ll be focusing on robotics. Robots have the potential to reshape entire industries, enhance our daily lives, and transform the geopolitical landscape. And advances in artificial intelligence promise a new era of even smarter machines that can handle all kinds of complex tasks.
Allison OKAMURA: Ultimately, I think it’s going to be that we don’t have general purpose robots that do everything, but we are going to compartmentalize them, at least to some extent, so that we can ensure that we have a reasonable amount of safety.
Kevin WEIL: AI is already, for hundreds of millions of people around the world, a daily fixture that's helping them solve problems from the mundane like, "I need to write this email to my boss,” all the way to critical problems. It’s meaningful for national security.
GILES: Joining me today are Allison Okamura, a professor of mechanical engineering at Stanford University and a Science Fellow at the Hoover Institution who is an expert in robotics, and Kevin Weil, the Chief Product Officer of OpenAI and a member of the Council on Foreign Relations. Thank you both for being with us today. So, let's dive straight in here. Allison, what are some of the most exciting developments you are seeing in robotics at the moment?
OKAMURA: Well, one which is very relevant to this podcast is the integration of new forms of AI with the physical body of a robot. It's this idea of embodied intelligence, where AI is no longer only for information tasks, but for tasks that can be achieved in the physical world. And on my side, because I'm a mechanical engineer and I'm very interested in the physical body of the robot, I think an especially interesting development is novel soft robot bodies that can be safer, more dexterous, and more adaptable to the environment. And these AI and machine learning technologies are enabling those types of soft robots to still perform complex, dexterous tasks even if the robots themselves are a bit floppy.
GILES: Can you give us a couple of examples of what a soft robot is and does?
OKAMURA: Yeah, so soft robots are not yet something that you really see much on the market, but they're technologies being developed in universities and small startup companies. For example, let's say that you need a robot that can go inside the human body in order to perform a surgical task. If you do this through conventional surgical instruments, they're rigid, sort of long, skinny instruments. But imagine that you need to get them into a difficult-to-reach place. If they can be soft and deformable, and yet still become rigid when necessary in order to apply forces, then you can actually accomplish tasks that might not be accessible otherwise.
GILES: Got it. Kevin, let's switch to you. OpenAI has been driving a lot of the momentum in AI innovation with the development of multiple iterations of ChatGPT and other models. And ChatGPT takes vast amounts of data to train a model that then kind of outputs almost human-like responses to queries. What are some of the things that are exciting you out there right now?
WEIL: I have a couple. One is that people are used to just typing to ChatGPT, that being the way that you interact with these AI models, but we want you to be able to interact with them in every way that you can interact with another human. So you and I can type to each other, but we can also talk to each other like we're doing now. We can see each other like we can right now. So we want models to be able to speak and to listen, to be able to take in video and understand the real world in real time and react accordingly. Something that's obviously very valuable in all kinds of robotics scenarios. And so that's one. The other is that models are just at the cusp of being able to reason for the first time. So when you think about GPT-2, 3, 4, 5, et cetera, they're trained on a huge amount of data, but they kind of do System 1-level thinking. So you ask them a question, they give you an answer, you ask them a question, they give you an answer. Most models don't get to reason. So we had a breakthrough in our research over the past year or so, and we just launched this new model called o1 that actually can reason. So you give it a question, and it doesn't just immediately give you an answer. It thinks, and it generates hypotheses based on what it knows. Sometimes it'll refute those hypotheses. Other times it'll affirm them, and then it'll continue to reason from there, which is, when you think about it, how we actually make scientific progress as people, or how we answer any hard question. So the fact that models are just starting to be able to do that, super exciting.
GILES: When you say it thinks, what exactly do you mean it thinks? Is it kind of just churning through more data or is there something else going on?
WEIL: Well, so think about if I gave you a Sudoku or one of those New York Times Connections puzzles to do, where you've got to group 16 words into four groups of four. You would start by forming hypotheses as you went. You'd read that list of words and you'd go, "Okay, I think those two are similar. Alright, is there a third that's similar? No, not really. It can't be that, must be something else, right?" And as you go, you're forming hypotheses, then refuting some of those hypotheses. Other ones you realize, “Oh, that one's got legs. Okay, I'm going to keep trying on that.” And the model is learning how to do that. And that’s how it makes progress. So, it’s really exciting to see models beginning to do that. Also, super relevant to robotics.
GILES: Right. So Allison, let's come back to you. You brought up AI right at the beginning. We’ve talked about ChatGPT and there are plenty of other capable models emerging, like DeepSeek from China and Amazon’s Nova. How is AI going to transform robotics and will it transform every robot?
OKAMURA: That's a great question—and also reflecting on what Kevin said—so in the early days of AI, it was very rule-based, right? In some ways it reflected some of what you were saying, Kevin, about the hypothesis-driven aspect of ‘you test something and you make choices.’ But if you are only doing it that way and you're not learning, then you just have a very limited set of actions that you can do. And so early robots with AI could not handle unpredictable situations, because they were based on a set of rules, and often the world would not match those rules, and so you wouldn't be able to go anywhere from there. What is happening now is that we are very interested in, for example, foundation models for robots: how do we collect lots of data about different types of scenarios, let's say manipulation tasks? And from that you can learn policies that can adapt to a wide variety of different activities. Now, I don't think yet in robots that we have implemented the type of models that OpenAI has. But I think that will be a critical step, kind of like a bridge between the old-fashioned, rule-based type of AI that has been implemented on robots for a very long time, and the current foundation model machine learning approaches, which are being implemented on robots, but we're finding it difficult, I guess, to build on them in practical manipulation scenarios.
GILES: Got it. So we're kind of going from R2D2 to R2GPT-2. Will we see the day, both of you, when AI and robotics combine to create smart machines that are as capable as a human being, do you think?
WEIL: I'll defer to Allison on the robotics side. She knows far more than I do, but I'd say it's hard to imagine that that isn't part of our future. The models are getting smarter. You've got people like Allison pushing the state-of-the-art on robotics, and you're going to combine these two things together, and I think that's exactly what we're going to see.
GILES: Allison?
OKAMURA: Yeah. So when you say whether a machine can do something as well as a human, it automatically brings to mind the Turing test, which was famously a way to describe whether or not an artificially intelligent system could be indistinguishable from a human. And that's on the intelligence side. But then on the physical side, someday, yes, but we still have a long way to go in terms of the physical robots. And I think our, I don't know, humans’ bar is just really high. There are studies showing, for example, that if a person has a very slight, minor injury and they walk just a little bit differently, everyone who knows that person will notice that they're walking a little bit funny. And now you imagine what you see today in terms of humanoid robots and how far off they are physically from being able to achieve the same kind of locomotion tasks as humans. We still have a long way to go because we have yet to be able to recreate the kind of power and flexibility of the musculature of the human body. And it is where we would like to go, but I think it's going to be a long time. The intelligence, Kevin, do you think we're there? We've probably blown the Turing test out of the water, right?
WEIL: I mean, I agree with you, especially when it comes to robotics and interaction in the physical world, I think we're still a number of years away, but I have a hard time imagining it's not part of the future. The Turing test as an example, it was the gold standard of AI for the longest time, and we just whooshed past it and now nobody talks about it anymore.
GILES: And the Turing test—so that’s the ability for a machine to exhibit intelligent behavior that’s kind of equivalent to a human? We just went past that?
WEIL: I mean, not indistinguishable, but we're just used to a world where computers can now do a bunch of things that they could never do before. And in much more general purpose ways. I mean, calculators have been able to do computation better than humans for a long time, but they were very narrow. Now you've got these more general purpose things. A friend of mine was telling me the story about their first time riding a Waymo, a driverless car from Google, in San Francisco. If you've never done it, it's awesome, you should do it. But for the first 10 seconds of riding the Waymo, their hands were gripped on the doors like, "Oh my God, watch out for that bicycle." And then five minutes into riding a Waymo for the first time, they were relaxed and just looking around like, "Oh my God, this is the future. I am living in the future." And then 15 minutes into riding a Waymo for the first time, they were bored, scrolling on their phone. Immediately this thing that had been science fiction to them became not a thing worth even thinking about. I mean, I do think that there will be some aspect of that with robotics, where at first it's not quite good enough, and then very quickly we're just used to it being a part of our lives all the time.
OKAMURA: I think it will depend on what people's expectations are, right? So in computer graphics, you can either try to make hyper realistic computer graphics that look like real people, simulate water correctly, and all of these things that are computationally very challenging, but sometimes the right approach is to make things kind of cartoonish. You purposely don't go for hyperrealism. So I think robots, and people's acceptance of them, will be much better served if we're not actually trying to make a robot that looks and acts just like a human. Because I don't really think our goal is to replace humans in terms of all of our social aspects in addition to the physical ones. So I think if we are not trying to exactly imitate a human, but are doing useful things in the world that everybody wants, like doing the dishes and laundry and things like that, I don't think we're going to want them to look like humans. So that might be a bit of asking the wrong question, or setting the wrong goal, if we want it to say, "Yeah, it just does what humans do."
WEIL: I'm just curious for your perspective, Allison. You mentioned not thinking that humanoid was the right place to go right away, but when you think about things like doing laundry or doing dishes or whatever, so much of our lives are built around the form factor that we have. And so you can start building up from first principles, "Well, if it's going to be able to do the dishes, it needs to be able to reach into this drawer or cabinet or whatever." And so what forms do you think work for those kinds of everyday things and yet are not humanoid?
OKAMURA: That's a great question. So I guess it depends on what it is. What is the task? What are you trying to do? And it begs the question: is a dishwasher itself not a robot? Is a washer, a dryer, not a robot? And I think we don't think of them as robots because they're fairly single purpose. So it is true that for most people, what they imagine and what they would want out of a robot is that it can do multiple different types of tasks. At the same time, just like no one human does all tasks, right? You have a certain specialization and you want that, otherwise you'll have some direct conflict. You might have some type of task where it's really good to have a small body and a small form factor. And for another one, you want someone tall to reach up and get the dishes off of the top shelf. So I think this is just going to be a real balance, and it's trying to understand what consumers want in this space, combined with what people will accept. I don't know exactly where it'll be, but there's going to be this spectrum between your dishwasher and an all-purpose humanoid that does everything from tutor your kid to empty the dishwasher. So we're going to be somewhere along that spectrum, and I don't really know where the market is going to land us.
GILES: I mean, one thing we do want to be sure of though is that the robots that we interact with aren't biased in some ways towards certain actions that might harm us or that they are kind of responsible in terms of safety. How do you both think as technologists about the responsibility you bear when you build both robotics and AI and the fusion that's coming in terms of those issues?
WEIL: I mean, it's obviously a super relevant question when it comes to an AI model because you can ask an AI model anything. So if you're building a normal computer program, you have a UI, there's sort of a limited set of ways that you can provide input to these things. And in some sense, you constrain your problem. You don't have that with AI. You can write anything to these models. And so we have to try and think through a pretty broad range of ways that they could be misused or could hallucinate or could otherwise produce outputs that we don't want them doing.
GILES: By hallucination you mean they just imagine responses that aren't correct, but they sound convincing. That's what you mean by hallucinate?
WEIL: Yeah, because in some deep sense they're statistical, and so you end up sometimes getting statistically correct answers that are, in practice, in the particular use case, incorrect. So one of the ways that we deal with this is we publish a model spec, which is out there. It's on the web. You can look it up—OpenAI model spec. That says, "Here's how we want our models to respond to a huge variety of different situations." You could ask it about politics, and probably nobody wants the model to have its own politics and be recommending things. So you want it to sort of demur in those cases. Well, what happens if you ask it if the earth is flat? So you go from politics, which is maybe 50/50, and you don't want the model to be expressing a side, to if 99.9 percent of people believe that the earth is not flat, should the model take a stance on that? And all kinds of other things that you can imagine that are difficult questions that you need to think through. And so with the model spec, we try and say, "This is how we want the model to respond. This is how it should respond to these kinds of questions." And then you have how the model actually responds. And so if you get a response that doesn't feel right, it can either be because the spec was right and in practice the model deviated from it, which we can learn from and address, or you disagree with the spec, in which case, because it's public, we can have an open discussion about that as a society if we need to. And so those are the ways, that's how we think about trying to answer these tough questions about how a model should behave. And we'll have to continue to evolve that as we get towards more physical forms in robotics.
GILES: Allison, how do you think about it when you build a robot? There are obviously a lot of things coming together, including the operating system.
OKAMURA: Yeah. So I think I sort of have to distinguish between what we're doing in task- and application-specific robot scenarios versus the bigger idea of robots that use machine intelligence in order to do any task. Because today's robots are very much task specific. So let's take a surgical robot for example. Most surgical robots that are used in practice today are not... You wouldn't think of them as intelligent in any way. They are tele-operated, for example, by a human surgeon. And for scenarios where they don't interact directly with tissue, they can use planning in order to, for example, shoot radiation beams through a part of the body in order to overlap radiation over a place where you need to kill the most cancer. And there'll be a plan that's set up in advance, which usually would get reviewed by an oncologist or radiologist before you hit go. And so there are all kinds of existing safeguards in these medical robots, whether it's a human directly in the loop or a human approving a plan before it's executed. But then what we're really curious about, and maybe a little worried about, is what about the next generation, where you're using these statistical models that are sometimes going to be wrong? And given that your intelligence now has a physical presence that could potentially cause direct, immediate physical harm to a person or an environment, how do you prevent that? There's no really easy answer there. But for example, you could say, "We're just going to limit the forces. We're not going to let this robot apply forces larger than X, because that should, under a set of scenarios, prevent any harm that could come to a person." But that also means that robot maybe can't lift a car off of a person who's trapped underneath, and that robot would not be able to do some really good, strong things that would be needed.
So ultimately, I think it's going to be that we don't have general purpose robots that do everything, but we are going to compartmentalize them, at least to some extent on the spectrum I was talking about, so that we can ensure that we have a reasonable amount of safety.
GILES: I'd love to keep talking about this, it's such a fascinating subject, but let's zoom out now and look at some broader policy related issues. I'd like to look at where America sits right now in terms of use of robotics versus the rest of the world. If you look at the adoption of robots in manufacturing, we have about, I think, 285 robots per 10,000 employees in America, and that puts us about 10th in the world behind countries like Germany, China, South Korea. What do you make of that, Allison? Should we be worried? And what's driving our innovation kind of engine here?
OKAMURA: So the numbers that you described are really primarily driven by manufacturing robots, not Roombas that are vacuum cleaning in our homes and such, but rather robots that are assembling cars. And so I think a lot of that reflects how manufacturing is done and where it's done and what kind of jobs people in the U.S. want. I think in terms of innovation, my perspective is that some of it is about where the funding comes from, and also it's about culture, separate from the manufacturing side. So in the U.S. a lot of the funding for robotics is coming from the military-industrial complex and driving towards specific applications in security and military. There is government funding and some company funding that can be used for really creative projects. But I feel like in the U.S., people are a little more on the practical side with robots. Whereas in Europe I see more funding for maybe not-so-practical projects, but projects that maybe will improve acceptance of robots by people. So for example, things related to art and creativity and novel experiences. And then you take countries like Japan, which is quite willing to push the envelope in terms of what their robots will do. And that comes from the fact that Japan has very little immigration and a rapidly aging population. And so who is going to take what frankly would be some of the less attractive jobs that they have trouble bringing people into? If you don't have new people coming into the country to take some of those jobs, if you don't have a young population to replenish, then robots are the solution. So I think in Japan, they're more willing to take risks on robots doing things with people that in the U.S. we're not willing to do.
GILES: I always remember in Japan, you have robots in manga magazines and in comics. They're always presented as kind of positive things, whereas in Hollywood, they're often like killer robots or you know that something bad is going to happen when you see a robot. Not always, but-
OKAMURA: I know, very much so.
GILES: Kevin, in terms of AI adoption, how do you think America is doing versus the rest of the world?
WEIL: I think on the whole very well. We were just talking about Japan and cultural affinity for robots. You see similar things with AI in Japan, actually. There's this sense in the way it's portrayed of, let's just get to the Jetsons faster. And so it's a culture that's also very AI positive and sees the benefits that AI can bring. But on the whole, I think that the U.S. is in a similar place. There are more conversations here about AI safety, and those are good conversations. But on the whole, AI is already, for hundreds of millions of people around the world, a daily fixture that's helping them solve problems from the mundane like, "I need to write this email to my boss," all the way to critical problems. It's meaningful for national security. So I'm very optimistic at the rate that AI is taking off in the U.S. and around the world. I don't think that's going to change, because the models are going to get smarter, they're going to get faster, they're going to get cheaper, they're going to get more useful across the board. So I think this is only really going in one direction.
GILES: Allison, I just want to ask you what you think the next administration should be doing? If you had a roadmap with the three top things you think it should be doing to promote robotics?
OKAMURA: Well, what any academic would say, more funding, but putting that aside, I do think it's been interesting that although there's a Robotics Caucus in Congress, and there's an Office of Science and Technology Policy, we don't see a lot of engineers engaging with policymakers. Many people historically joining OSTP have been more on the pure science side. So I would love to see more engineers invited into the conversation.
GILES: So just to be clear, OSTP that stands for the Office of Science and Technology Policy.
OKAMURA: Yes.
GILES: And you'd like to see more engineers invited in. What else would you like to see?
OKAMURA: I would say enabling more creativity in the robotics field by having not just more funding, but funding that allows people to explore new ideas that aren't so directed towards practical applications, say military or manufacturing, and thinking more about how robots can actually enhance people's daily lives. And I already think the government has done quite well with medical robotics in terms of not over-regulating, but having the right barriers and hoops in place to ensure that the systems are safe. And that's been really great for building up public trust in medical robotics.
GILES: Perfect. With that we’re getting towards the end of the show but we have about a minute left. Time for a lightning round of questions. Very brief. Number one, what is your favorite book or movie about robots and why? Kevin?
WEIL: Oh, gosh. Favorite movie or book about robots? I'll go with Asimov, Foundation.
OKAMURA: I would say Wall-E.
GILES: Wall-E. Oh I love that one.
WEIL: Oh that's a good one. That's a good one.
GILES: What was your favorite subject at school? I want to find out if you were science nerds. Allison.
OKAMURA: Physics, science nerd.
GILES: Kevin?
WEIL: Physics and math all day long.
GILES: A double win. Number three. Elon Musk has predicted that by 2040 there will be more robots on earth than humans. Agree or disagree, Kevin?
WEIL: I'll disagree, but I think he's right in the long run. As with many things, Elon is maybe overambitious in his time frames, but usually right about technology in the long term.
GILES: Got it. Allison?
OKAMURA: I have to say, agree, depending on your definition of a robot.
GILES: And finally, if you could have one robot that doesn't exist today in your daily routine, what would it be? Mine would be a dishwasher loading robot, because that would solve a few arguments in the Giles household. Kevin, what would your choice of robot be?
WEIL: Oh, man. I'm going to go with a much more intelligent and versatile Roomba that can go all over the house. There's probably been a lot of Roomba innovation since I last checked, but I could use help there.
GILES: Allison?
OKAMURA: Just building on the housework theme. I'm in charge of laundry in my household and there are laundry folding robots, but nothing that I want in my house yet.
GILES: That's great. Thank you both very much for joining me today and for a terrific and super insightful conversation.
WEIL: Thank you.
OKAMURA: Thank you.
GILES: For resources used in this episode and more information, visit CFR.org/TheInterconnect and take a look at the show notes. If you ever have any questions or suggestions, connect with us at [email protected]. And to read the new 2025 Stanford Emerging Technology Review visit SETR.Stanford.edu. That's S-E-T-R.stanford.edu.
The Interconnect is a production of the Council on Foreign Relations and the Stanford Emerging Technology Review from the Hoover Institution and the Stanford School of Engineering. The opinions expressed on the show are solely those of the guests, not of CFR, which takes no institutional positions on matters of policy. Nor do they reflect the opinions of the Hoover Institution or of the Stanford School of Engineering.
This episode was produced by Gabrielle Sierra, Molly McAnany, Shana Farley, and Malaysia Atwater. Our audio producer is Markus Zakaria. Special thanks to our recording engineers Lori Becker and Bryan Mendives. You can subscribe to the show on Apple Podcasts, Spotify, YouTube, or wherever you get your audio. For The Interconnect, this is Martin Giles. Thanks for listening.
Show Notes
Robots are already a fixture in our daily lives, helping millions of people around the world perform tasks and solve problems. As these machines get smarter thanks to artificial intelligence and other advances, they have the potential to transform a wide range of industries, from healthcare to manufacturing.
In this episode of The Interconnect, Stanford University mechanical engineering professor and Hoover Institution Science Fellow Allison Okamura brings her expertise in robotics to a conversation with Kevin Weil, the Chief Product Officer of OpenAI. Together, they discuss innovations such as “soft robots,” how artificial intelligence will lead to more capable machines, and what policymakers can do to accelerate the growth of the U.S. robotics industry.
Read the 2025 Stanford Emerging Technology Review at https://setr.stanford.edu/
Podcast with Martin Giles, Luciana L. Borio and Drew Endy March 27, 2025 The Interconnect
Podcast with Martin Giles, Esther Brimmer and Simone D’Amico March 13, 2025 The Interconnect
Podcast with Martin Giles, Sebastian Elbaum and Mark Horowitz February 13, 2025 The Interconnect